
    Spontaneous Facial Behavior Computing in Human Machine Interaction with Applications in Autism Treatment

    Digital devices and computing machines such as computers, hand-held devices and robots are becoming an important part of our daily life. To build affect-aware intelligent Human-Machine Interaction (HMI) systems, scientists and engineers have aimed to design interfaces that emulate face-to-face communication. Such HMI systems are capable of detecting and responding to users' emotions and affective states. One of the main challenges in producing such an intelligent system is to design a machine that can automatically compute spontaneous human behaviors in real-life settings. Since human facial behaviors contain important non-verbal cues, this dissertation studies facial actions and behaviors in HMI systems. The two main objectives of this dissertation are: (1) capturing, annotating and computing spontaneous facial expressions in a Human-Computer Interaction (HCI) system and releasing a database that allows researchers to study the dynamics of facial muscle movements in both posed and spontaneous data; (2) developing and deploying a robot-based intervention protocol for autism therapeutic applications and modeling facial behaviors of children with high-functioning autism in a real-world Human-Robot Interaction (HRI) system.
    Because of the lack of data for analyzing the dynamics of spontaneous facial expressions, my colleagues and I introduced and released a novel database called Denver Intensity of Spontaneous Facial Actions (DISFA). DISFA describes facial expressions using the Facial Action Coding System (FACS), a gold-standard technique that annotates facial muscle movements in terms of a set of defined Action Units (AUs). This dissertation also introduces an automated system for recognizing DISFA's facial expressions and the dynamics of AUs in a single image or a sequence of facial images. Results illustrate that our automated system is capable of computing AU dynamics with high accuracy (overall reliability ICC = 0.77). In addition, this dissertation investigates and computes the dynamics and temporal patterns of both spontaneous and posed facial actions, which can be used to automatically infer the meaning of facial expressions.
    Another objective of this dissertation is to analyze and compute facial behaviors (i.e. eye gaze and head orientation) of individuals in a real-world HRI system. Because children with Autism Spectrum Disorder (ASD) show interest in technology, we designed and conducted a set of robot-based games to study and foster the socio-behavioral responses of children diagnosed with high-functioning ASD. Computing the gaze direction and head orientation patterns illustrates how individuals with ASD regulate their facial behaviors differently (compared to typically developing children) when interacting with a robot. In addition, studying the behavioral responses of participants during the different phases of this study (i.e. baseline, intervention and follow-up) reveals that, overall, a robot-based therapy setting can be a viable approach for helping individuals with autism.
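The overall reliability figure (ICC = 0.77) quantifies agreement between the automated AU intensity estimates and a reference coding (presumably the manual FACS intensities). Below is a minimal sketch of one common formulation, ICC(3,1) from Shrout and Fleiss; the specific ICC variant and the example intensities are assumptions, not values taken from the dissertation.

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1): two-way mixed model, consistency, single measures (Shrout & Fleiss).

    `ratings` is an (n_items, k_raters) array, e.g. column 0 = manual FACS
    intensity codes and column 1 = automated intensity estimates.
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)    # between items (frames)
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)    # between raters
    ss_err = np.sum((x - grand) ** 2) - ss_rows - ss_cols  # residual
    bms = ss_rows / (n - 1)                                # between-item mean square
    ems = ss_err / ((n - 1) * (k - 1))                     # error mean square
    return (bms - ems) / (bms + (k - 1) * ems)

# Hypothetical AU intensities on the 0-5 FACS scale (not DISFA data).
manual    = [0, 1, 3, 5, 2, 0, 4, 1]
automated = [0, 2, 3, 4, 2, 1, 4, 1]
print(round(icc_3_1(np.column_stack([manual, automated])), 2))
```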

    Children-Robot Interaction: Eye Gaze Analysis of Children with Autism During Social Interactions

    Background: Typically developing individuals use the direction of eye gaze and eye fixation/shifting as crucial elements to transmit socially relevant information (e.g. like, dislike) to others. In individuals with Autism Spectrum Disorder (ASD), a deviant pattern of mutual eye gaze is a noticeable feature that may be one of the earliest (detectable) demonstrations of impaired social skills, leading to other deficits (e.g. delayed development of social cognition and affective construal processes). This can significantly affect the quality of their social interactions. Recent studies reveal that children with ASD show superior engagement in robot-based interaction, and that it can effectively trigger positive behaviors (e.g. eye gaze attention). This suggests that interacting with robots may be a promising intervention approach for children with ASD.
    Objectives: The main objective of this multidisciplinary research is to use humanoid robot technology, along with psychological and engineering sciences, to improve the social skills of children with High Functioning Autism (HFA). The designed intervention protocol focuses on different skillsets, such as eye gaze attention, joint attention, facial expression recognition and imitation. The current study is designed to evaluate the eye gaze patterns of children with ASD during verbal communication with a humanoid robot.
    Methods: Participants in this study are 13 male children aged 7-17 (M = 11 years) diagnosed with ASD. The study employs NAO, an autonomous, programmable humanoid robot from Aldebaran Robotics, to interact with the children in a series of conversations and interactive games across 3 sessions. During the different game segments, NAO and the children exchange stories and hold conversations on different topics. During every session, four cameras installed in the video-capture room, in addition to NAO's front-facing camera, record the entire interaction. Videos were later scored to analyze the gaze patterns of the children in two contexts, studying eye gaze fixation and eye gaze shifting while: 1) NAO is talking, and 2) the child is talking.
    Results: In order to analyze the eye gaze of participants, every frame of video was manually coded as Gaze Averted ('0') or Gaze At ('1') with respect to NAO. To accurately analyze the gaze patterns of the children during the conversation, the video segments of 'NAO Talking' and 'Kid Talking' were selected. The averages of four measures were used to report the static and dynamic properties of eye gaze patterns: 1) 'NAO talking': Gaze At NAO (GAN) = 55.3%, Gaze Shifting (GS) = 3.4%, GAN/GS = 34.10, Entropy GS = 0.20; 2) 'Kid talking': GAN = 43.8%, GS = 4.2%, GAN/GS = 11.6, Entropy GS = 0.27.
    Conclusions: The results indicate that the children with ASD make more eye contact and shift their gaze less while NAO is talking (higher GAN/GS and lower Entropy GS); however, they shift their gaze more often and fixate less on the robot while they themselves are speaking. These results will serve as an important basis for significantly advancing the emerging field of robot-assisted therapy for children with ASD.
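The measures above can be derived directly from the per-frame 0/1 coding. The sketch below relies on definitions the abstract does not spell out, so they are assumptions: GS is taken as the fraction of frame-to-frame transitions where the code changes, and Entropy GS as the Shannon entropy of the shift/no-shift distribution; the frame sequence is hypothetical.

```python
import numpy as np

def gaze_metrics(codes):
    """Gaze measures from per-frame coding: 1 = Gaze At NAO, 0 = Gaze Averted.

    Assumed definitions (not stated in the abstract):
    GAN        - fraction of frames coded Gaze At NAO
    GS         - fraction of frame-to-frame transitions where the code changes
    Entropy GS - Shannon entropy of the shift / no-shift distribution
    """
    codes = np.asarray(codes)
    gan = codes.mean()
    shifts = np.abs(np.diff(codes))          # 1 where gaze switched between frames
    gs = shifts.mean()
    p = np.array([gs, 1 - gs])
    p = p[p > 0]                             # avoid log(0)
    entropy_gs = -(p * np.log2(p)).sum()
    return gan, gs, gan / gs if gs > 0 else np.inf, entropy_gs

# Hypothetical per-frame coding for a short 'NAO talking' segment.
segment = [1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1]
gan, gs, ratio, h = gaze_metrics(segment)
print(f"GAN={gan:.1%}  GS={gs:.1%}  GAN/GS={ratio:.1f}  Entropy GS={h:.2f}")
```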

    Using Robots as Therapeutic Agents to Teach Children with Autism Recognize Facial Expression

    Background: Recognizing and mimicking facial expressions are important cues for building rapport and relationships in human-human communication. Individuals with Autism Spectrum Disorder (ASD) often have deficits in recognizing and mimicking social cues such as facial expressions. In the last decade several studies have shown that individuals with ASD show superior engagement with objects, and particularly with robots (both humanoid and non-humanoid). However, the majority of these studies have focused on robot appearance and engineering design concepts, and very little research has examined the effectiveness of robots in therapeutic and treatment applications. In fact, the critical question of how robots can help individuals with autism practice and learn social communication skills and apply them in their daily interactions has not yet been addressed.
    Objective: In a multidisciplinary research study, we explored how effective robot-based therapeutic sessions can be and to what extent they can improve the social experiences of children with ASD. We developed and executed a robot-based, multi-session therapeutic protocol consisting of three phases (i.e. baseline, intervention and human-validation sessions) that can serve as a treatment mechanism for individuals with ASD.
    Methods: We recruited seven children (2F/5M), 6-13 years old (mean = 10.14 years), diagnosed with High Functioning Autism (HFA). We employed NAO, an autonomous, programmable humanoid robot, to interact with the children in a series of social games over several sessions. We captured all the visual and audio communication between NAO and the child using multiple cameras. All the capturing devices were connected to a monitoring system outside the study room, where a coder observed and annotated the responses of the child online. In every session, NAO asked the child to identify the type of prototypic facial expression (i.e. happy, sad, angry, or neutral) shown in five different photos. In the 'baseline' sessions we measured each child's prior knowledge of emotion and facial expression concepts. In the 'intervention' sessions, NAO provided verbal feedback (if needed) to help the child identify the facial expression. After finishing the intervention sessions, we included two 'human-validation' sessions (with no feedback) to evaluate how well the child could apply the learned concepts when NAO was replaced by a human.
    Results: The following table shows the mean and standard deviation (STD) of facial expression recognition rates for all subjects in the three phases of our study. In our experiment, six out of seven subjects had a baseline recognition rate lower than 80%, and we observed high variation (STD) between subjects.
    Facial Expression Recognition Rate (%)
                   Baseline        Intervention    Human-Validation
    Mean (STD)     69.52 (36.28)   85.83 (20.54)   94.28 (15.11)
    Conclusions: The results demonstrate the effectiveness of NAO for teaching and improving facial expression recognition (FER) skills in children with ASD. More specifically, in the baseline phase, the low FER rate (69.52%) with high variability (STD = 36.28) demonstrates that, overall, participants had difficulty recognizing expressions. The statistical results of the intervention phase confirm that NAO can reliably teach children to recognize facial expressions (higher accuracy with lower STD). Interestingly, in the human-validation phase children recognized the basic facial expressions with even higher accuracy (94%) and very limited variability (STD = 15.11). These results indicate that robot-based feedback and intervention with a customized protocol can improve the learning capabilities and social skills of children with ASD.
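For reference, the per-phase values in the table above are the mean and STD of the individual subjects' recognition rates. The sketch below illustrates that aggregation; the per-subject rates are hypothetical placeholders, and only the phase-level means/STDs reported in the table come from the study.

```python
import numpy as np

# Hypothetical per-subject facial expression recognition rates (%) for the
# seven participants; these are illustrative, not the study's data.
rates = {
    "Baseline":         [100, 20, 45, 95, 30, 65, 80],
    "Intervention":     [100, 60, 75, 95, 70, 90, 85],
    "Human-Validation": [100, 80, 90, 100, 95, 100, 95],
}

for phase, r in rates.items():
    r = np.asarray(r, dtype=float)
    # Sample STD (ddof=1) assumed; the abstract does not state which estimator was used.
    print(f"{phase:16s} mean={r.mean():6.2f}%  STD={r.std(ddof=1):.2f}")
```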

    Social Risk and Depression: Evidence from Manual and Automatic Facial Expression Analysis

    We investigated the relationship between change over time in severity of depression symptoms and facial expression. Depressed participants were followed over the course of treatment and video recorded during a series of clinical interviews. Facial expressions were analyzed from the video using both manual and automatic systems. Automatic and manual coding were highly consistent for FACS action units and showed similar effects for change over time in depression severity. For both systems, when symptom severity was high, participants made more facial expressions associated with contempt, smiled less, and those smiles that did occur were more likely to be accompanied by facial actions associated with contempt. These results are consistent with the "social risk hypothesis" of depression. According to this hypothesis, when symptoms are severe, depressed participants withdraw from other people in order to protect themselves from anticipated rejection, scorn, and social exclusion. As their symptoms fade, participants send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and produced the same pattern of depression effects suggests that automatic facial expression analysis may be ready for use in behavioral and clinical science.